6 research outputs found
Active Velocity Estimation using Light Curtains via Self-Supervised Multi-Armed Bandits
To navigate in an environment safely and autonomously, robots must accurately
estimate where obstacles are and how they move. Instead of using expensive
traditional 3D sensors, we explore the use of a much cheaper, faster, and
higher-resolution alternative: programmable light curtains. Light curtains are
controllable depth sensors that sense only along a user-selected surface. We
adapt a probabilistic method based on particle filters and
occupancy grids to explicitly estimate the position and velocity of 3D points
in the scene using partial measurements made by light curtains. The central
challenge is to decide where to place the light curtain to accurately perform
this task. We propose multiple curtain placement strategies guided by
maximizing information gain and verifying predicted object locations. Then, we
combine these strategies using an online learning framework. We propose a novel
self-supervised reward function that evaluates the accuracy of current velocity
estimates using future light curtain placements. We use a multi-armed bandit
framework to intelligently switch between placement policies in real time,
outperforming fixed policies. We develop a full-stack navigation system that
uses position and velocity estimates from light curtains for downstream tasks
such as localization, mapping, path-planning, and obstacle avoidance. This work
paves the way for controllable light curtains to accurately, efficiently, and
purposefully perceive and navigate complex and dynamic environments. Project
website: https://siddancha.github.io/projects/active-velocity-estimation/
Comment: 9 pages (main paper), 3 pages (references), 9 pages (appendix)
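The abstract above describes switching between curtain-placement policies with a multi-armed bandit, but does not specify the bandit algorithm or reward details. The sketch below uses a generic UCB1 selector as a stand-in: the class name, the two-arm setup, and the fixed per-arm rewards in the usage example are all illustrative assumptions, not the paper's method.

```python
import math

class UCB1PolicySelector:
    """Sketch of a UCB1 multi-armed bandit for switching between
    curtain-placement policies (the arms), given a scalar reward
    observed after each placement."""

    def __init__(self, num_arms):
        self.counts = [0] * num_arms    # number of pulls per arm
        self.values = [0.0] * num_arms  # running mean reward per arm
        self.t = 0                      # total number of pulls

    def select(self):
        """Pick the arm with the highest upper confidence bound."""
        self.t += 1
        # Play every arm once before applying the UCB rule.
        for arm, n in enumerate(self.counts):
            if n == 0:
                return arm
        # UCB1: mean reward plus an exploration bonus that shrinks
        # as an arm accumulates pulls.
        return max(
            range(len(self.counts)),
            key=lambda a: self.values[a]
            + math.sqrt(2.0 * math.log(self.t) / self.counts[a]),
        )

    def update(self, arm, reward):
        """Incrementally update the running mean reward of `arm`."""
        self.counts[arm] += 1
        self.values[arm] += (reward - self.values[arm]) / self.counts[arm]

# Illustrative usage with two hypothetical placement policies whose
# (here deterministic) rewards differ; the selector concentrates on
# the better arm while still occasionally exploring the other.
selector = UCB1PolicySelector(num_arms=2)
policy_rewards = [0.2, 0.9]  # assumed rewards, for illustration only
for _ in range(200):
    arm = selector.select()
    selector.update(arm, policy_rewards[arm])
```

In the paper's setting the reward would come from the proposed self-supervised signal (checking current velocity estimates against future curtain placements) rather than a fixed constant.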
EVORA: Deep Evidential Traversability Learning for Risk-Aware Off-Road Autonomy
Traversing terrain with good traction is crucial for achieving fast off-road
navigation. Instead of manually designing costs based on terrain features,
existing methods learn terrain properties directly from data via
self-supervision, but it remains challenging to properly quantify and mitigate
the risks caused by uncertainty in the learned models. This work efficiently
quantifies both
aleatoric and epistemic uncertainties by learning discrete traction
distributions and probability densities of the traction predictor's latent
features. Leveraging evidential deep learning, we parameterize Dirichlet
distributions with the network outputs and propose a novel uncertainty-aware
squared Earth Mover's distance loss with a closed-form expression that improves
learning accuracy and navigation performance. The proposed risk-aware planner
simulates state trajectories with the worst-case expected traction to handle
aleatoric uncertainty, and penalizes trajectories moving through terrain with
high epistemic uncertainty. Our approach is extensively validated in simulation
and on wheeled and quadruped robots, showing improved navigation performance
compared to methods that assume no slip, assume the expected traction, or
optimize for the worst-case expected cost.
Comment: Under review. Journal extension for arXiv:2210.00153. Project
website: https://xiaoyi-cai.github.io/evora
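The EVORA abstract combines Dirichlet-parameterized evidential outputs with a squared Earth Mover's distance loss that has a closed form. A minimal sketch of the two standard ingredients, under the assumption that traction is discretized into ordered bins: the Dirichlet mean gives the expected traction distribution, and the squared EMD between two 1-D discrete distributions reduces to the sum of squared CDF differences. The paper's actual loss is additionally uncertainty-aware; function names here are illustrative.

```python
import numpy as np

def dirichlet_mean(alpha):
    """Expected categorical distribution under Dirichlet(alpha):
    E[p_i] = alpha_i / sum_j alpha_j. In evidential deep learning,
    `alpha` would be produced by the network's outputs."""
    alpha = np.asarray(alpha, dtype=float)
    return alpha / alpha.sum(axis=-1, keepdims=True)

def squared_emd_loss(pred, target):
    """Squared Earth Mover's distance between two discrete 1-D
    distributions over ordered bins, using the closed form
    EMD^2 = sum_i (CDF_pred[i] - CDF_target[i])^2."""
    cdf_pred = np.cumsum(pred, axis=-1)
    cdf_target = np.cumsum(target, axis=-1)
    return np.sum((cdf_pred - cdf_target) ** 2, axis=-1)

# Illustrative usage: mass concentrated in distant bins incurs a
# larger penalty than mass in adjacent bins, unlike cross-entropy,
# which ignores bin ordering.
low_traction = np.array([1.0, 0.0, 0.0])
high_traction = np.array([0.0, 0.0, 1.0])
loss = squared_emd_loss(low_traction, high_traction)
```

Because the loss depends only on cumulative sums, it is differentiable and cheap to evaluate, which is one reason a closed-form EMD variant is attractive for training.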